Moderate: Red Hat Ceph Storage 3.2 security, bug fix, and enhancement update

Related Vulnerabilities: CVE-2018-19039

Synopsis

Moderate: Red Hat Ceph Storage 3.2 security, bug fix, and enhancement update

Type/Severity

Security Advisory: Moderate

Topic

An update is now available for Red Hat Ceph Storage 3.2.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

Description

Red Hat Ceph Storage is a scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services.

Security Fix(es):

  • grafana: File exfiltration (CVE-2018-19039)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Bug Fix(es) and Enhancement(s)

For detailed information on changes in this release, see the Red Hat Ceph Storage 3.2 Release Notes available at:

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3.2/html/release_notes/index

Solution

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258
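
As a minimal illustration (assuming a yum-based Red Hat Enterprise Linux 7 host that is already subscribed to the Red Hat Ceph Storage 3 repositories; the article above and the Release Notes remain the authoritative procedure), the updated packages can be applied and the relevant daemons restarted roughly as follows:

  # yum update
  # systemctl restart ceph-mon.target

Run the restart on Monitor hosts first, then restart ceph-osd.target and ceph-radosgw.target on the OSD and Object Gateway hosts as appropriate; Ansible-managed and containerized deployments typically use the ceph-ansible rolling update playbook instead.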

Affected Products

  • Red Hat Ceph Storage 3 x86_64
  • Red Hat Ceph Storage MON 3 x86_64
  • Red Hat Ceph Storage OSD 3 x86_64
  • Red Hat Ceph Storage for Power 3 ppc64le
  • Red Hat Ceph Storage MON for Power 3 ppc64le
  • Red Hat Ceph Storage OSD for Power 3 ppc64le

Fixes

  • BZ - 1506782 - osd_scrub_auto_repair not working as expected
  • BZ - 1540881 - [CEE/SD] monitor_interface with "-" in the name fails with "msg": "'dict object' has no attribute u'ansible_bond-monitor-interface'"
  • BZ - 1593110 - Ceph mgr daemon crashing after starting balancer module in automatic mode
  • BZ - 1600138 - [Bluestore]: one of the osds flapped multiple times with 1525: FAILED assert(0 == "bluefs enospc")
  • BZ - 1636251 - ceph-keys fails if RHEL is configured in FIPS mode
  • BZ - 1638092 - Default crush rule is not enforced
  • BZ - 1639833 - [RFE] Enabling CRUSH device classes should not incur data movement in the cluster
  • BZ - 1648168 - ceph-validate: devices are not validated in non-collocated and lvm_batch scenario
  • BZ - 1649697 - CVE-2018-19039 grafana: File exfiltration
  • BZ - 1653307 - [ceph-ansible] - lvms not removed while purging cluster
  • BZ - 1656935 - ceph-ansible: purge-cluster.yml fails when initiated second time
  • BZ - 1660962 - rgw does not support delimiter as a string; it only supports a single character [consulting]
  • BZ - 1664869 - [RFE] Support configuring multiple RGW endpoints in ceph-ansible for RGW multisite
  • BZ - 1666407 - MDS may hang at startup if PurgeQueue metadata objects are damaged
  • BZ - 1666408 - ceph-fuse may miss reconnect during MDS switch
  • BZ - 1666409 - MDS should allow configuration of heartbeat timeout
  • BZ - 1668050 - [RFE] RGW OPA authorization tech preview
  • BZ - 1668362 - Verify PG recovery control / 3 line items from BB spreadsheet
  • BZ - 1669901 - [RFE] Implement mechanism and command to change/reset bucket objects owner / RGW bucket chown
  • BZ - 1670165 - Bucket lifecycle: bucket is not getting added to lc list when `'NoncurrentVersionExpiration': {'NoncurrentDays': 2}` is set
  • BZ - 1670321 - [GSS] Downloads are corrupted when using RGW with civetweb as frontend
  • BZ - 1670663 - [Ceph-Ansible][ceph-containers] Add new OSD node to the existing ceph cluster is failing with '--limit osds' option
  • BZ - 1672333 - Optimize MDS stale cap revoke behavior
  • BZ - 1672878 - [Ceph-Ansible][ceph-containers] Missing permission for MDS in client.admin
  • BZ - 1673687 - Failure creating ceph.conf for mon - No first item, sequence was empty.
  • BZ - 1674549 - [cee/sd][ceph-mgr] luminous: deadlock in standby ceph-mgr daemons
  • BZ - 1678470 - BlueStore OSD crashes in _do_read - BlueStore::_do_read
  • BZ - 1679263 - radosgw-admin bucket limit check stuck generating high read ops with > 999 buckets per user [Consulting]
  • BZ - 1680171 - containerized radosgw requires higher --cpu-quota as default
  • BZ - 1683997 - permissions in /var/lib/ceph/mon aren't set properly
  • BZ - 1684146 - Ability to start ceph daemons with numactl
  • BZ - 1684283 - Ceph Containers SSL support - RGW daemons using rgw-multisite have communication issues and sync gets stuck
  • BZ - 1684289 - Testing RGW Multi-site SSL support
  • BZ - 1684435 - Bucket lifecycle: current version of the object does not get deleted for tag-based filters
  • BZ - 1684642 - [RFE] rgw-multisite: add perf counters to data sync
  • BZ - 1685733 - MDS may abort when handling deleted file
  • BZ - 1685735 - Monitors will assign standby-replay to degraded ranks
  • BZ - 1687038 - os/filestore: ceph_abort() on fsync(2) or fdatasync(2) failure
  • BZ - 1687039 - osd/PG.cc: account for missing set irrespective of last_complete
  • BZ - 1687041 - mon/OSDMonitor: do not populate void pg_temp into nextmap
  • BZ - 1687567 - rgw: use of PK11_ImportSymKey implies non-FIPS-compliant key management workflow (blocks FIPS)
  • BZ - 1687828 - [cee/sd][ceph-ansible] rolling-update.yml does not restart nvme osds running in containers
  • BZ - 1688330 - Request for backport for fixed issue https://tracker.ceph.com/issues/21533
  • BZ - 1688378 - ops waiting for resharding to complete may not be able to complete when resharding does complete
  • BZ - 1688541 - command `radosgw-admin bi put` does not set the mtime correctly
  • BZ - 1688869 - rgw: Lifecyle: handle resharded buckets
  • BZ - 1689266 - rgw: unordered bucket listing markers do not handle adorned object names correctly
  • BZ - 1689410 - s3cmd info not working on Ceph 3.2 (cors policies) giving 500 (Internal Server Error)
  • BZ - 1690941 - Some multipart uploads with SSE-C are corrupted
  • BZ - 1692555 - 'radosgw-admin sync status' does not show timestamps for master zone
  • BZ - 1693445 - rgw-multisite sync stuck recovering shard in already deleted versioned bucket
  • BZ - 1695174 - rgw: fix evaluation of bucket policies and permissions for non-existent objects
  • BZ - 1699478 - rgw-multisite: log trimming does not make progress unless zones 'sync_from_all'
  • BZ - 1701970 - Inefficient unordered bucket listing
  • BZ - 1702311 - [cee/sd][ceph-ansible] shrink-osd.yml is failing due to missing osd_fsid in "ceph --cluster ceph osd find 0" output

CVEs

  • CVE-2018-19039 (https://access.redhat.com/security/cve/CVE-2018-19039)

References

  • https://access.redhat.com/security/updates/classification/#moderate